Results 1 - 20 of 14,400
1.
Int J Oral Sci ; 16(1): 34, 2024 May 08.
Article in English | MEDLINE | ID: mdl-38719817

ABSTRACT

Accurate segmentation of oral-surgery-related tissues from cone beam computed tomography (CBCT) images can significantly accelerate treatment planning and improve surgical accuracy. In this paper, we propose a fully automated tissue segmentation system for dental implant surgery. Specifically, we propose an image preprocessing method based on data distribution histograms, which can adaptively process CBCT images acquired with different parameters. Based on this, we use a bone segmentation network to obtain segmentation results for the alveolar bone, teeth, and maxillary sinus. We then use the tooth and mandibular regions as ROIs for tooth segmentation and mandibular canal segmentation, respectively. The tooth segmentation results also provide the order information of the dentition. Experimental results show that our method achieves higher segmentation accuracy and efficiency than existing methods, with average Dice scores of 96.5%, 95.4%, 93.6%, and 94.8% on the tooth, alveolar bone, maxillary sinus, and mandibular canal segmentation tasks, respectively. These results demonstrate that the system can accelerate the development of digital dentistry.
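The preprocessing method itself is not reproduced in this listing; as a rough sketch of the idea of histogram-driven, parameter-agnostic preprocessing, one can clip each CBCT volume at percentiles of its own intensity distribution before normalizing (the function name and percentile values are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def adaptive_window(volume, lo_pct=1.0, hi_pct=99.0):
    """Clip a CBCT volume to bounds taken from its own intensity
    histogram, then rescale to [0, 1]. Because the bounds are
    percentiles of the data distribution, scans acquired with
    different scanner parameters are normalized consistently."""
    lo, hi = np.percentile(volume, [lo_pct, hi_pct])
    clipped = np.clip(volume, lo, hi)
    return (clipped - lo) / (hi - lo + 1e-8)
```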


Subject(s)
Cone-Beam Computed Tomography, Cone-Beam Computed Tomography/methods, Humans, Alveolar Process/diagnostic imaging, Computer-Assisted Image Processing/methods, Artificial Intelligence, Maxillary Sinus/diagnostic imaging, Maxillary Sinus/surgery, Mandible/diagnostic imaging, Mandible/surgery, Tooth/diagnostic imaging
2.
Cancer Imaging ; 24(1): 60, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38720391

ABSTRACT

BACKGROUND: This study systematically compares the impact of innovative deep learning image reconstruction (DLIR, TrueFidelity) and conventionally used iterative reconstruction (IR) on nodule volumetry and subjective image quality (IQ) at highly reduced radiation doses. This is essential in the context of low-dose CT lung cancer screening, where accurate volumetry and characterization of pulmonary nodules across repeated CT scans are indispensable. MATERIALS AND METHODS: A standardized CT dataset was established using an anthropomorphic chest phantom (Lungman, Kyoto Kaguku Inc., Kyoto, Japan) containing a set of 3D-printed lung nodules spanning six diameters (4 to 9 mm) and three morphology classes (lobular, spiculated, smooth), with an established ground truth. Images were acquired at varying radiation doses (6.04, 3.03, 1.54, 0.77, 0.41 and 0.20 mGy) and reconstructed with combinations of reconstruction kernels (soft and hard) and reconstruction algorithms (ASIR-V and DLIR at low, medium and high strength). Semi-automatic volumetry measurements and subjective image quality scores recorded by five radiologists were analyzed with multiple linear regression and mixed-effect ordinal logistic regression models. RESULTS: Volumetric errors of nodules imaged with DLIR are up to 50% lower than with ASIR-V, especially at radiation doses below 1 mGy and when reconstructed with a hard kernel. Across all nodule diameters and morphologies, volumetric errors are commonly lower with DLIR. Furthermore, DLIR renders higher subjective IQ, especially at sub-mGy doses: radiologists were up to nine times more likely to assign the highest IQ score to these images than to those reconstructed with ASIR-V. Lung nodules with irregular margins and small diameters were also more likely (up to five times) to be ascribed the best IQ scores when reconstructed with DLIR.
CONCLUSION: We observed that DLIR performs as well as or better than conventionally used reconstruction algorithms in terms of volumetric accuracy and subjective IQ of nodules in an anthropomorphic chest phantom. As such, DLIR may allow lowering the radiation dose for lung cancer screening participants without compromising accurate measurement and characterization of lung nodules.
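The volumetric-error metric underlying these phantom comparisons is straightforward since the printed nodules have known volumes; a minimal sketch (function names are illustrative, not from the study's code):

```python
def volumetric_error_pct(measured_ml, truth_ml):
    """Signed percentage error of a measured nodule volume
    against the 3D-printed ground-truth volume."""
    return 100.0 * (measured_ml - truth_ml) / truth_ml

def mean_abs_error_pct(pairs):
    """Mean absolute percentage volumetric error over a set of
    (measured, truth) nodule volume pairs."""
    return sum(abs(volumetric_error_pct(m, t)) for m, t in pairs) / len(pairs)
```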


Subject(s)
Deep Learning, Lung Neoplasms, Multiple Pulmonary Nodules, Imaging Phantoms, Radiation Dosage, X-Ray Computed Tomography, Humans, X-Ray Computed Tomography/methods, Multiple Pulmonary Nodules/diagnostic imaging, Multiple Pulmonary Nodules/pathology, Lung Neoplasms/diagnostic imaging, Lung Neoplasms/pathology, Solitary Pulmonary Nodule/diagnostic imaging, Solitary Pulmonary Nodule/pathology, Computer-Assisted Radiographic Image Interpretation/methods, Computer-Assisted Image Processing/methods
3.
F1000Res ; 13: 274, 2024.
Article in English | MEDLINE | ID: mdl-38725640

ABSTRACT

Background: The most recent advances in computed tomography (CT) image reconstruction are deep learning image reconstruction (DLIR) algorithms. Owing to drawbacks of iterative reconstruction (IR) techniques, such as unnatural image texture and nonlinear spatial resolution, DLIR is gradually replacing IR. However, the potential use of DLIR in head and chest CT needs further examination. Hence, the purpose of this study is to review the influence of DLIR on radiation dose (RD), image noise (IN), and study outcomes compared with IR and FBP in head and chest CT examinations. Methods: We performed a detailed search of PubMed, Scopus, Web of Science, Cochrane Library, and Embase to find articles reporting the use of DLIR for head and chest CT examinations between 2017 and 2023. Data were retrieved from the short-listed studies following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Results: Of 196 articles retrieved, 15 were included, with a total sample size of 1292. Fourteen articles were rated as high quality and one as moderate quality. All studies compared DLIR to IR techniques; five also compared DLIR with both IR and FBP. The review showed that DLIR improved image quality (IQ) and reduced RD and IN for head and chest CT examinations. Conclusions: The DLIR algorithm has demonstrated a notable enhancement in IQ with reduced IN for head and chest CT examinations at lower doses compared with IR and FBP. DLIR shows potential for enhancing patient care by reducing radiation risks and increasing diagnostic accuracy.


Subject(s)
Algorithms, Deep Learning, Head, Radiation Dosage, X-Ray Computed Tomography, Humans, X-Ray Computed Tomography/methods, Head/diagnostic imaging, Computer-Assisted Image Processing/methods, Thorax/diagnostic imaging, Thoracic Radiography/methods, Signal-to-Noise Ratio
4.
Nat Commun ; 15(1): 3942, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38729933

ABSTRACT

In clinical oncology, many diagnostic tasks rely on the identification of cells in histopathology images. While supervised machine learning techniques require labels, providing manual cell annotations is time-consuming. In this paper, we propose a self-supervised framework (enVironment-aware cOntrastive cell represenTation learning: VOLTA) for cell representation learning in histopathology images, using a technique that accounts for a cell's mutual relationship with its environment. We subject our model to extensive experiments on data collected from multiple institutions, comprising over 800,000 cells and six cancer types. To showcase the potential of our proposed framework, we apply VOLTA to ovarian and endometrial cancers and demonstrate that our cell representations can be utilized to identify the known histotypes of ovarian cancer and provide insights that link histopathology and molecular subtypes of endometrial cancer. Unlike supervised models, our framework can empower discoveries without any annotation data, even in situations where sample sizes are limited.
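VOLTA's exact objective is not given in this abstract; contrastive frameworks of this kind typically optimize an InfoNCE-style loss, sketched here for a single anchor cell (a generic illustration under that assumption, not the authors' implementation):

```python
import math

def info_nce(sim_pos, sim_negs, temperature=0.1):
    """InfoNCE-style loss for one anchor cell: the similarity to its
    positive view (sim_pos) should dominate the similarities to
    negative cells (sim_negs). Lower loss = better-separated embedding."""
    logits = [sim_pos / temperature] + [s / temperature for s in sim_negs]
    m = max(logits)  # stabilize the log-sum-exp
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_denom - sim_pos / temperature
```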


Subject(s)
Endometrial Neoplasms, Ovarian Neoplasms, Humans, Female, Endometrial Neoplasms/pathology, Ovarian Neoplasms/pathology, Machine Learning, Supervised Machine Learning, Algorithms, Computer-Assisted Image Processing/methods
5.
Sci Rep ; 14(1): 10753, 2024 05 10.
Article in English | MEDLINE | ID: mdl-38730248

ABSTRACT

This paper proposes an approach to enhance the differentiation of benign and malignant breast tumors (BT) using histopathology images from the BreakHis dataset. The main stages involve preprocessing, which encompasses image resizing and data partitioning (training and testing sets), followed by data augmentation techniques. A custom CNN performs both feature extraction and classification. The experimental results show that the proposed approach using the custom CNN model achieves better performance, with an accuracy of 84%, than the same approach using pretrained models, including MobileNetV3, EfficientNetB0, VGG16, and ResNet50V2, which yield relatively lower accuracies ranging from 74% to 82%; these four models are used as both feature extractors and classifiers. To increase accuracy and other performance metrics, the Grey Wolf Optimization (GWO) and Modified Gorilla Troops Optimization (MGTO) metaheuristic optimizers are applied to each model separately for hyperparameter tuning. In this case, the experimental results show that the custom CNN model, refined with MGTO optimization, reaches an exceptional accuracy of 93.13% in just 10 iterations, outperforming the other state-of-the-art methods and the four pretrained models on the BreakHis dataset.
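As a hedged illustration of the metaheuristic tuning step, here is a minimal Grey Wolf Optimization loop on a toy objective; a real application would wrap model training and validation accuracy as the objective function (all names and constants here are illustrative, not the study's configuration):

```python
import random

def gwo_minimize(f, bounds, n_wolves=10, iters=50, seed=0):
    """Minimal Grey Wolf Optimizer: candidate solutions ("wolves")
    move toward the three best solutions found so far (alpha, beta,
    delta), with an exploration factor that decays over iterations."""
    rng = random.Random(seed)
    dim = len(bounds)
    wolves = [[rng.uniform(lo, hi) for lo, hi in bounds]
              for _ in range(n_wolves)]
    for t in range(iters):
        wolves.sort(key=f)
        leaders = [w[:] for w in wolves[:3]]   # alpha, beta, delta
        a = 2.0 * (1.0 - t / iters)            # decays from 2 toward 0
        for w in wolves:
            for d in range(dim):
                new = 0.0
                for leader in leaders:
                    r1, r2 = rng.random(), rng.random()
                    A = 2.0 * a * r1 - a
                    C = 2.0 * r2
                    new += leader[d] - A * abs(C * leader[d] - w[d])
                lo, hi = bounds[d]
                w[d] = min(max(new / 3.0, lo), hi)
    return min(wolves, key=f)
```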


Subject(s)
Breast Neoplasms, Deep Learning, Humans, Breast Neoplasms/classification, Breast Neoplasms/pathology, Breast Neoplasms/diagnosis, Female, Neural Networks (Computer), Computer-Assisted Image Processing/methods, Algorithms
6.
Sci Rep ; 14(1): 10471, 2024 05 07.
Article in English | MEDLINE | ID: mdl-38714840

ABSTRACT

Lung diseases impose a significant pathological burden and mortality rate globally; in particular, the differential diagnosis between adenocarcinoma, squamous cell carcinoma, and small cell lung carcinoma is paramount in determining optimal treatment strategies and improving clinical prognoses. Faced with the challenge of improving diagnostic precision and stability, this study developed an innovative deep learning-based model. The model combines a Feature Pyramid Network (FPN) and Squeeze-and-Excitation (SE) modules with a Residual Network (ResNet18) to enhance the processing of complex images and conduct multi-scale analysis of each channel's importance in classifying lung cancer. Moreover, performance is further enhanced by knowledge distillation from larger teacher models to more compact student models. Subjected to rigorous five-fold cross-validation, our model outperforms existing models on all performance metrics, exhibiting exceptional diagnostic accuracy. Ablation studies on the model components verified that each addition effectively improves performance, achieving an average accuracy of 98.84% and a Matthews correlation coefficient (MCC) of 98.83%. Collectively, the results indicate that our model significantly improves the accuracy of disease diagnosis, providing physicians with more precise clinical decision-making support.
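The Squeeze-and-Excitation module mentioned above can be sketched in a few lines of NumPy (the weights and reduction ratio are illustrative placeholders; the paper presumably uses a standard trainable SE block):

```python
import numpy as np

def se_block(feature_maps, w1, w2):
    """Squeeze-and-Excitation: global-average-pool each channel
    ("squeeze"), pass the channel vector through a two-layer
    bottleneck ("excitation"), and rescale every channel by the
    resulting sigmoid weight.
    feature_maps: (C, H, W); w1: (C, C//r); w2: (C//r, C)."""
    squeeze = feature_maps.mean(axis=(1, 2))         # (C,)
    hidden = np.maximum(squeeze @ w1, 0.0)           # ReLU
    scale = 1.0 / (1.0 + np.exp(-(hidden @ w2)))     # sigmoid, (C,)
    return feature_maps * scale[:, None, None]
```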


Subject(s)
Deep Learning, Lung Neoplasms, Neural Networks (Computer), Humans, Lung Neoplasms/pathology, Lung Neoplasms/diagnosis, Lung Neoplasms/classification, Small Cell Lung Carcinoma/diagnosis, Small Cell Lung Carcinoma/pathology, Small Cell Lung Carcinoma/classification, Squamous Cell Carcinoma/diagnosis, Squamous Cell Carcinoma/pathology, Adenocarcinoma/pathology, Adenocarcinoma/diagnosis, Adenocarcinoma/classification, Computer-Assisted Image Processing/methods, Differential Diagnosis
7.
J Hematol Oncol ; 17(1): 27, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38693553

ABSTRACT

The rapid advancement of large language models (LLMs) such as ChatGPT has raised concerns about their potential impact on academic integrity. While initial concerns focused on ChatGPT's writing capabilities, recent updates have integrated DALL-E 3's image generation features, extending the risks to visual evidence in biomedical research. Our tests revealed that ChatGPT's nearly barrier-free image generation feature can be used to generate experimental result images, such as blood smears, Western blots, immunofluorescence images, and so on. Although ChatGPT's current ability to generate experimental images is limited, the risk of misuse is evident. This development underscores the need for immediate action. We suggest that AI providers restrict the generation of experimental images, develop tools to detect AI-generated images, and consider adding "invisible watermarks" to generated images. By implementing these measures, we can better ensure the responsible use of AI technology in academic research and maintain the integrity of scientific evidence.


Subject(s)
Biomedical Research, Humans, Biomedical Research/methods, Computer-Assisted Image Processing/methods, Artificial Intelligence, Software
8.
Sci Rep ; 14(1): 11013, 2024 05 14.
Article in English | MEDLINE | ID: mdl-38745039

ABSTRACT

Cancer stem cells presumably drive tumor growth and resistance to conventional cancer treatments. From a previous computational model, we inferred that these cells are not uniformly distributed in the bulk of a tumorsphere. To confirm this result, we cultivated tumorspheres enriched in stem cells and performed immunofluorescent detection of the stemness marker SOX2 using confocal microscopy. In this article, we present an image processing method that reconstructs the number and locations of the cancer stem cells in the spheroids. Its advantage is the use of a statistical criterion to classify the cells as stem or differentiated, instead of setting an arbitrary threshold. Moreover, the analysis of the experimental images presented in this work agrees with the results of our computational models, reinforcing the notion that the distribution of cancer stem cells in a tumorsphere is non-homogeneous. Additionally, the method provides a useful tool for analyzing any image in which different kinds of cells are stained with different markers.
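The article's specific statistical criterion is not spelled out in this abstract; Otsu's between-class-variance criterion is one standard, non-arbitrary way to split marker intensities into two classes, sketched here purely as a stand-in:

```python
import numpy as np

def otsu_threshold(intensities, bins=64):
    """Otsu's criterion: choose the intensity threshold that
    maximizes the between-class variance of the two resulting
    groups, rather than picking a threshold by hand."""
    hist, edges = np.histogram(intensities, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = p[:i].sum(), p[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:i] * centers[:i]).sum() / w0
        mu1 = (p[i:] * centers[i:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t
```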


Subject(s)
Neoplastic Stem Cells, Cellular Spheroids, Neoplastic Stem Cells/metabolism, Neoplastic Stem Cells/pathology, Humans, Cellular Spheroids/pathology, Cellular Spheroids/metabolism, SOXB1 Transcription Factors/metabolism, Computer-Assisted Image Processing/methods, Confocal Microscopy, Tumor Cell Line
9.
Sci Rep ; 14(1): 10395, 2024 05 06.
Article in English | MEDLINE | ID: mdl-38710726

ABSTRACT

To assess the feasibility of code-free deep learning (CFDL) platforms for predicting binary outcomes from fundus images in ophthalmology, we evaluated two distinct online platforms (Google Vertex and Amazon Rekognition) and two distinct datasets. Two publicly available datasets, Messidor-2 and BRSET, were utilized for model development. Messidor-2 consists of fundus photographs from diabetic patients, and BRSET is a multi-label dataset. The CFDL platforms were used to create deep learning models, with no preprocessing of the images, by a single ophthalmologist without coding expertise. The performance metrics employed to evaluate the models were F1 score, area under the curve (AUC), precision, and recall. The performance metrics for referable diabetic retinopathy and macular edema were above 0.9 for both tasks and both CFDL platforms. The Google Vertex models demonstrated superior performance compared to the Amazon models, with the BRSET dataset achieving the highest accuracy (AUC of 0.994). Multi-classification tasks using only BRSET achieved similar overall performance between platforms, with Google Vertex achieving AUCs of 0.994 for laterality, 0.942 for age grouping, 0.779 for genetic sex identification, 0.857 for optic, and 0.837 for normality. The study demonstrates the feasibility of using automated machine learning platforms for predicting binary outcomes from fundus images in ophthalmology. It highlights the high accuracy achieved by the models in some tasks and the potential of CFDL as an entry-friendly platform for ophthalmologists to familiarize themselves with machine learning concepts.
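The reported precision, recall, and F1 metrics can all be computed from a binary confusion matrix; a minimal sketch (illustrative code, not the platforms' internal implementation):

```python
def binary_metrics(y_true, y_pred):
    """Precision, recall, and F1 for a binary screening task,
    computed from true/false positives and false negatives."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```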


Subject(s)
Diabetic Retinopathy, Fundus Oculi, Machine Learning, Humans, Diabetic Retinopathy/diagnostic imaging, Female, Male, Deep Learning, Middle Aged, Adult, Health Personnel, Macular Edema/diagnostic imaging, Computer-Assisted Image Processing/methods, Aged
10.
Cancer Imaging ; 24(1): 57, 2024 May 06.
Article in English | MEDLINE | ID: mdl-38711135

ABSTRACT

BACKGROUND: PSMA PET/CT is a predictive and prognostic biomarker for determining response to [177Lu]Lu-PSMA-617 in patients with metastatic castration-resistant prostate cancer (mCRPC). Thresholds defined to date may not be generalizable to newer image reconstruction algorithms. The Bayesian penalized likelihood (BPL) reconstruction algorithm is a novel algorithm that may improve contrast while preventing the introduction of image noise. The aim of this study is to compare the quantitative parameters obtained using the BPL and Ordered Subset Expectation Maximization (OSEM) reconstruction algorithms. METHODS: Fifty consecutive patients with mCRPC who underwent [68Ga]Ga-PSMA-11 PET/CT using OSEM reconstruction to assess suitability for [177Lu]Lu-PSMA-617 therapy were selected. The BPL algorithm was then used retrospectively to reconstruct the same PET raw data. Quantitative and volumetric measurements such as tumour standardised uptake value (SUV)max, SUVmean and molecular tumour volume (MTV-PSMA) were calculated for both reconstruction methods. Results were compared (Bland-Altman, Pearson correlation coefficient), including subgroups with low- and high-volume disease burdens (MTV-PSMA cut-off 40 mL). RESULTS: The SUVmax and SUVmean were higher, and MTV-PSMA was lower, in the BPL-reconstructed images compared to the OSEM group, with mean differences of 8.4 (17.5%), 0.7 (8.2%) and -21.5 mL (-3.4%), respectively. There was a strong correlation between the calculated SUVmax, SUVmean, and MTV-PSMA values in the OSEM- and BPL-reconstructed images (Pearson r values of 0.98, 0.99, and 1.0, respectively). No patients were reclassified from low- to high-volume disease or vice versa when switching from OSEM to BPL reconstruction. CONCLUSIONS: [68Ga]Ga-PSMA-11 PET/CT quantitative and volumetric parameters produced by the BPL and OSEM reconstruction methods are strongly correlated. Differences are proportional and small for SUVmean, which is used as a predictive biomarker.
Our study suggests that both reconstruction methods are acceptable without clinical impact on quantitative or volumetric findings. For longitudinal comparison, committing to the same reconstruction method would be preferred to ensure consistency.
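The Bland-Altman comparison used above reduces to a bias (mean difference) and 95% limits of agreement over paired measurements; a minimal sketch (function and variable names are illustrative):

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bland-Altman agreement statistics for paired measurements
    from two reconstruction methods: returns the mean difference
    (bias) and the 95% limits of agreement (bias +/- 1.96 SD)."""
    a = np.asarray(method_a, dtype=float)
    b = np.asarray(method_b, dtype=float)
    diff = b - a
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```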


Subject(s)
Algorithms, Bayes Theorem, Gallium Isotopes, Gallium Radioisotopes, Positron Emission Tomography Computed Tomography, Castration-Resistant Prostatic Neoplasms, Humans, Male, Positron Emission Tomography Computed Tomography/methods, Castration-Resistant Prostatic Neoplasms/diagnostic imaging, Castration-Resistant Prostatic Neoplasms/pathology, Aged, Middle Aged, Retrospective Studies, Oligopeptides, Edetic Acid/analogs & derivatives, Whole Body Imaging/methods, Radiopharmaceuticals, Aged 80 and over, Neoplasm Metastasis, Computer-Assisted Image Processing/methods, Dipeptides/therapeutic use
11.
Oncotarget ; 15: 288-300, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38712741

ABSTRACT

PURPOSE: The number of sequential PET/CT studies that oncology patients can undergo during their treatment follow-up course is limited by radiation dosage. We propose an artificial intelligence (AI) tool that produces attenuation-corrected PET (AC-PET) images from non-attenuation-corrected PET (NAC-PET) images to reduce the need for low-dose CT scans. METHODS: A deep learning algorithm based on a 2D Pix-2-Pix generative adversarial network (GAN) architecture was developed from paired AC-PET and NAC-PET images. 18F-DCFPyL PSMA PET-CT studies from 302 prostate cancer patients were split into training, validation, and testing cohorts (n = 183, 60, and 59, respectively). Models were trained with two normalization strategies: Standard Uptake Value (SUV)-based and SUV-Nyul-based. Scan-level performance was evaluated by normalized mean square error (NMSE), mean absolute error (MAE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR). Lesion-level analysis was performed in regions of interest prospectively defined by nuclear medicine physicians. SUV metrics were evaluated using the intraclass correlation coefficient (ICC), repeatability coefficient (RC), and linear mixed-effects modeling. RESULTS: Median NMSE, MAE, SSIM, and PSNR were 13.26%, 3.59%, 0.891, and 26.82, respectively, in the independent test cohort. ICCs for SUVmax and SUVmean were 0.88 and 0.89, indicating a high correlation between original and AI-generated quantitative imaging markers. Lesion location, density (Hounsfield units), and lesion uptake were all shown to impact the relative error in generated SUV metrics (all p < 0.05). CONCLUSION: The Pix-2-Pix GAN model for generating AC-PET demonstrates SUV metrics that correlate highly with the original images. AI-generated PET images show clinical potential for reducing the need for CT scans for attenuation correction while preserving quantitative markers and image quality.
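The scan-level metrics reported above are standard; a minimal sketch of NMSE, MAE, and PSNR for a paired image comparison (SSIM is omitted because it requires windowed statistics; names are illustrative):

```python
import numpy as np

def image_metrics(generated, reference):
    """Scan-level NMSE, MAE, and PSNR between an AI-generated
    AC-PET image and its ground-truth reconstruction."""
    g = np.asarray(generated, dtype=float)
    r = np.asarray(reference, dtype=float)
    mse = np.mean((g - r) ** 2)
    nmse = mse / np.mean(r ** 2)
    mae = np.mean(np.abs(g - r))
    peak = r.max()
    psnr = 10 * np.log10(peak ** 2 / mse) if mse > 0 else float("inf")
    return nmse, mae, psnr
```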


Subject(s)
Deep Learning, Positron Emission Tomography Computed Tomography, Prostatic Neoplasms, Humans, Positron Emission Tomography Computed Tomography/methods, Male, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/pathology, Aged, Middle Aged, Glutamate Carboxypeptidase II/metabolism, Surface Antigens/metabolism, Computer-Assisted Image Processing/methods, Algorithms, Radiopharmaceuticals, Reproducibility of Results
12.
J Biomed Opt ; 29(9): 093503, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38715717

ABSTRACT

Significance: Hyperspectral dark-field microscopy (HSDFM) and data cube analysis algorithms demonstrate successful detection and classification of various tissue types, including carcinoma regions in human post-lumpectomy breast tissues excised during breast-conserving surgeries. Aim: We expand the application of HSDFM to the classification of tissue types and tumor subtypes in pre-histopathology human breast lumpectomy samples. Approach: Breast tissues excised during breast-conserving surgeries were imaged by the HSDFM and analyzed. The performance of the HSDFM is evaluated by comparing the backscattering intensity spectra of polystyrene microbead solutions with the Monte Carlo simulation of the experimental data. For classification algorithms, two analysis approaches, a supervised technique based on the spectral angle mapper (SAM) algorithm and an unsupervised technique based on the K-means algorithm are applied to classify various tissue types including carcinoma subtypes. In the supervised technique, the SAM algorithm with manually extracted endmembers guided by H&E annotations is used as reference spectra, allowing for segmentation maps with classified tissue types including carcinoma subtypes. Results: The manually extracted endmembers of known tissue types and their corresponding threshold spectral correlation angles for classification make a good reference library that validates endmembers computed by the unsupervised K-means algorithm. The unsupervised K-means algorithm, with no a priori information, produces abundance maps with dominant endmembers of various tissue types, including carcinoma subtypes of invasive ductal carcinoma and invasive mucinous carcinoma. The two carcinomas' unique endmembers produced by the two methods agree with each other within <2% residual error margin. 
Conclusions: Our report demonstrates a robust procedure for the validation of an unsupervised algorithm with the essential set of parameters based on the ground truth, histopathological information. We have demonstrated that a trained library of the histopathology-guided endmembers and associated threshold spectral correlation angles computed against well-defined reference data cubes serve such parameters. Two classification algorithms, supervised and unsupervised algorithms, are employed to identify regions with carcinoma subtypes of invasive ductal carcinoma and invasive mucinous carcinoma present in the tissues. The two carcinomas' unique endmembers used by the two methods agree to <2% residual error margin. This library of high quality and collected under an environment with no ambient background may be instrumental to develop or validate more advanced unsupervised data cube analysis algorithms, such as effective neural networks for efficient subtype classification.
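The spectral angle mapper at the heart of the supervised approach scores each pixel by the angle between its spectrum and a reference endmember; a minimal sketch (illustrative code, not the authors' implementation):

```python
import numpy as np

def spectral_angle(pixel, endmember):
    """Spectral angle (radians) between a pixel spectrum and a
    reference endmember spectrum; smaller angles mean a closer
    spectral match, so a pixel is assigned to the endmember whose
    angle falls below the class's threshold."""
    p = np.asarray(pixel, dtype=float)
    e = np.asarray(endmember, dtype=float)
    cos = p @ e / (np.linalg.norm(p) * np.linalg.norm(e))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```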


Subject(s)
Algorithms, Breast Neoplasms, Segmental Mastectomy, Microscopy, Humans, Breast Neoplasms/diagnostic imaging, Breast Neoplasms/surgery, Breast Neoplasms/pathology, Female, Segmental Mastectomy/methods, Microscopy/methods, Breast/diagnostic imaging, Breast/pathology, Breast/surgery, Hyperspectral Imaging/methods, Margins of Excision, Monte Carlo Method, Computer-Assisted Image Processing/methods
13.
J Nucl Med ; 65(Suppl 1): 54S-63S, 2024 May 06.
Article in English | MEDLINE | ID: mdl-38719233

ABSTRACT

In recent decades, researchers worldwide have directed their efforts toward enhancing the quality of PET imaging. The detection sensitivity and image resolution of conventional PET scanners with a short axial field of view have been constrained, leading to a suboptimal signal-to-noise ratio. The advent of long-axial-field-of-view PET scanners, exemplified by the uEXPLORER system, marked a significant advancement. Total-body PET imaging possesses an extensive scan range of 194 cm and an ultrahigh detection sensitivity, and it has emerged as a promising avenue for improving image quality while reducing the administered radioactivity dose and shortening acquisition times. In this review, we elucidate the application of the uEXPLORER system at the Sun Yat-sen University Cancer Center, including the disease distribution, patient selection workflow, scanning protocol, and several enhanced clinical applications, along with encountered challenges. We anticipate that this review will provide insights into routine clinical practice and ultimately improve patient care.


Subject(s)
Positron Emission Tomography Computed Tomography, Whole Body Imaging, Humans, Positron Emission Tomography Computed Tomography/methods, Whole Body Imaging/methods, Neoplasms/diagnostic imaging, Tertiary Care Centers, Cancer Care Facilities, Computer-Assisted Image Processing/methods
14.
PLoS One ; 19(5): e0302880, 2024.
Article in English | MEDLINE | ID: mdl-38718092

ABSTRACT

Gastrointestinal (GI) cancer is the leading tumour type of the gastrointestinal tract and the fourth most significant cause of cancer death in men and women. A common treatment for GI cancer is radiation therapy, which involves directing a high-energy X-ray beam onto the tumor while avoiding healthy organs. To deliver high doses of X-rays, a system is needed for accurately segmenting the GI tract organs. This study presents a UMobileNetV2 model for semantic segmentation of the small intestine, large intestine, and stomach in MRI images of the GI tract. The model uses MobileNetV2 as the encoder in the contraction path and UNet layers as the decoder in the expansion path. The UW-Madison database, which contains MRI scans from 85 patients and 38,496 images, is used for evaluation. This automated technology has the capability to speed up cancer therapy by aiding the radiation oncologist in segmenting the organs of the GI tract. The UMobileNetV2 model is compared to three transfer learning models, Xception, ResNet101, and NASNet Mobile, used as encoders in the UNet architecture. Each model is analyzed with three distinct optimizers: Adam, RMSprop, and SGD. The UMobileNetV2 model with the Adam optimizer outperforms all other transfer learning models, obtaining a Dice coefficient of 0.8984, an IoU of 0.8697, and a validation loss of 0.1310, proving its ability to reliably segment the stomach and intestines in MRI images of gastrointestinal cancer patients.
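The Dice coefficient and IoU reported above can be computed directly from binary segmentation masks; a minimal sketch (illustrative names, not the study's code):

```python
import numpy as np

def dice_and_iou(pred, truth):
    """Dice coefficient and IoU between two binary masks:
    Dice = 2|P∩T| / (|P|+|T|), IoU = |P∩T| / |P∪T|."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    dice = 2 * inter / (pred.sum() + truth.sum())
    iou = inter / np.logical_or(pred, truth).sum()
    return float(dice), float(iou)
```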


Subject(s)
Gastrointestinal Neoplasms, Gastrointestinal Tract, Magnetic Resonance Imaging, Humans, Magnetic Resonance Imaging/methods, Gastrointestinal Neoplasms/diagnostic imaging, Gastrointestinal Neoplasms/pathology, Gastrointestinal Tract/diagnostic imaging, Semantics, Computer-Assisted Image Processing/methods, Female, Male, Stomach/diagnostic imaging, Stomach/pathology
15.
Sci Rep ; 14(1): 10412, 2024 05 06.
Article in English | MEDLINE | ID: mdl-38710744

ABSTRACT

The proposed work makes three major contributions: smart data collection, an optimized training algorithm, and the integration of a Bayesian approach with split learning to preserve the privacy of patient data. By integrating consumer electronics such as wearable devices and Internet of Things (IoT) sensors that acquire terahertz (THz) images, using the EM algorithm for training, and applying the newly proposed split learning method, the technology promises enhanced imaging depth and improved tissue contrast, thereby enabling early and accurate detection of breast cancer. With our hybrid algorithm, the breast cancer model achieves an accuracy of 97.5 percent over 100 epochs, surpassing less accurate older models that required a higher number of epochs, such as 165.


Subject(s)
Algorithms, Breast Neoplasms, Wearable Electronic Devices, Humans, Breast Neoplasms/diagnostic imaging, Breast Neoplasms/diagnosis, Internet of Things, Female, Terahertz Imaging/methods, Bayes Theorem, Diagnostic Imaging/methods, Computer-Assisted Image Processing/methods, Machine Learning
17.
Lasers Med Sci ; 39(1): 123, 2024 May 04.
Article in English | MEDLINE | ID: mdl-38703302

ABSTRACT

Interaction of polarized light with healthy and abnormal regions of tissue reveals structural information associated with its pathological condition. Even a slight variation in structural alignment can induce a change in polarization properties, which can play a crucial role in the early detection of abnormal tissue morphology. We propose a transmission-based Stokes-Mueller microscope for quantitative analysis of the microstructural properties of tissue specimens. Stokes-Mueller-based polarization microscopy provides significant structural information about tissue through various polarization parameters, such as the degree of polarization (DOP), degree of linear polarization (DOLP), degree of circular polarization (DOCP), and anisotropy (r), and Mueller decomposition parameters such as diattenuation, retardance, and depolarization. Further, the output images were analysed effectively by applying suitable image processing techniques such as machine learning (ML). The support vector machine image classification model achieved 95.78% validation accuracy and 94.81% testing accuracy with the polarization parameter dataset. The study's findings demonstrate the potential of Stokes-Mueller polarimetry in tissue characterization and diagnosis, providing a valuable tool for biomedical applications.
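The polarization parameters listed above follow directly from the Stokes vector; a minimal sketch (assuming Stokes parameters with S0 > 0; names are illustrative):

```python
import math

def polarization_degrees(S0, S1, S2, S3):
    """Degree of polarization (DOP), degree of linear polarization
    (DOLP), and degree of circular polarization (DOCP) computed
    from the four Stokes parameters of a light beam."""
    dop = math.sqrt(S1**2 + S2**2 + S3**2) / S0
    dolp = math.sqrt(S1**2 + S2**2) / S0
    docp = abs(S3) / S0
    return dop, dolp, docp
```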


Asunto(s)
Neoplasias de la Mama , Aprendizaje Automático , Microscopía de Polarización , Humanos , Microscopía de Polarización/métodos , Neoplasias de la Mama/patología , Femenino , Máquina de Vectores de Soporte , Procesamiento de Imagen Asistido por Computador/métodos , Carcinoma Ductal de Mama/patología , Carcinoma Ductal de Mama/clasificación , Carcinoma Ductal de Mama/diagnóstico por imagen
18.
IEEE Trans Med Imaging ; 43(5): 1782-1791, 2024 May.
Artículo en Inglés | MEDLINE | ID: mdl-38696285

RESUMEN

The advent of metal-based drugs and metal nanoparticles as therapeutic agents in anti-tumor treatment has motivated the advancement of X-ray fluorescence computed tomography (XFCT) techniques. An XFCT imaging modality can detect, quantify, and image the biodistribution of metal elements using the X-ray fluorescence signal emitted upon X-ray irradiation. However, the majority of XFCT imaging systems and instrumentation developed so far rely on a single or a small number of detectors. This work introduces the first full-ring benchtop X-ray fluorescence emission tomography (XFET) system equipped with 24 solid-state detectors arranged in a hexagonal geometry and a 96-pinhole compound-eye collimator. We experimentally demonstrate the system's sensitivity and its capability of multi-element detection and quantification by performing imaging studies on an animal-sized phantom. In our preliminary studies, the phantom was irradiated with a pencil beam of X-rays produced using a low-powered polychromatic X-ray source (90 kVp and 60 W max power). This investigation shows a significant enhancement in the detection limit of gadolinium to as low as 0.1 mg/mL concentration. The results also illustrate the unique capabilities of the XFET system to simultaneously determine the spatial distribution and accurately quantify the concentrations of multiple metal elements.


Asunto(s)
Fantasmas de Imagen , Animales , Espectrometría por Rayos X/métodos , Diseño de Equipo , Procesamiento de Imagen Asistido por Computador/métodos , Ratones
19.
Sci Rep ; 14(1): 10781, 2024 05 11.
Artículo en Inglés | MEDLINE | ID: mdl-38734781

RESUMEN

Magnetic resonance (MR) acquisitions of the torso are frequently affected by respiratory motion with detrimental effects on signal quality. The motion of organs inside the body is typically decoupled from surface motion and is best captured using rapid MR imaging (MRI). We propose a pipeline for prospective motion correction of the target organ using MR image navigators providing absolute motion estimates in millimeters. Our method is designed to feature multi-nuclear interleaving for non-proton MR acquisitions and to tolerate local transmit coils with inhomogeneous field and sensitivity distributions. OpenCV object tracking was introduced for rapid estimation of in-plane displacements in 2D MR images. A full three-dimensional translation vector was derived by combining displacements from slices of multiple and arbitrary orientations. The pipeline was implemented on 3 T and 7 T MR scanners and tested in phantoms and volunteers. Fast motion handling was achieved with low-resolution 2D MR image navigators and direct implementation of OpenCV into the MR scanner's reconstruction pipeline. Motion-phantom measurements demonstrate high tracking precision and accuracy with minor processing latency. The feasibility of the pipeline for reliable in-vivo motion extraction was shown on heart and kidney data. Organ motion was manually assessed by independent operators to quantify tracking performance. Object tracking performed convincingly on 7774 navigator images from phantom scans and different organs in volunteers. In particular, the kernelized correlation filter (KCF) achieved similar accuracy (74%) to that scored from inter-operator comparison (82%) while processing at a rate of over 100 frames per second. We conclude that fast 2D MR navigator images and computer vision object tracking can be used for accurate and rapid prospective motion correction. This, together with the modular structure of the pipeline, allows the proposed method to be used in imaging of moving organs and in challenging applications like cardiac magnetic resonance spectroscopy (MRS) or magnetic resonance imaging (MRI) guided radiotherapy.
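The step of combining in-plane displacements from arbitrarily oriented slices into one 3D translation can be posed as a small least-squares problem: each slice contributes two equations, the projections of the unknown translation t onto the slice's two in-plane axes. A hedged sketch (the slice geometry below is an illustrative assumption, not the paper's protocol):

```python
import numpy as np

def translation_from_slices(slice_axes, displacements):
    """Least-squares 3D translation t (in mm) from 2D in-plane navigator shifts.

    slice_axes   : list of (u, v) pairs, each a 3-vector spanning a slice plane
    displacements: matching list of (du, dv) measured in-plane shifts
    Each slice contributes the equations u.t = du and v.t = dv.
    """
    A, b = [], []
    for (u, v), (du, dv) in zip(slice_axes, displacements):
        A.extend([u, v])
        b.extend([du, dv])
    t, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return t

# Two orthogonal slices (axial + sagittal) already determine all three components.
axes = [((1, 0, 0), (0, 1, 0)),   # axial slice: measures x and y
        ((0, 1, 0), (0, 0, 1))]   # sagittal slice: measures y and z
shifts = [(2.0, -1.0), (-1.0, 0.5)]
t = translation_from_slices(axes, shifts)
```

With more than the minimum number of slices, the least-squares fit also averages out per-slice tracking noise.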


Asunto(s)
Fantasmas de Imagen , Humanos , Espectroscopía de Resonancia Magnética/métodos , Imagen por Resonancia Magnética/métodos , Respiración , Procesamiento de Imagen Asistido por Computador/métodos , Movimiento (Física) , Movimiento , Algoritmos
20.
Radiat Oncol ; 19(1): 55, 2024 May 12.
Artículo en Inglés | MEDLINE | ID: mdl-38735947

RESUMEN

BACKGROUND: Currently, automatic esophagus segmentation remains a challenging task due to the organ's small size, low contrast, and large shape variation. We aimed to improve the performance of esophagus segmentation in deep learning by applying a strategy that involves locating the object first and then performing the segmentation task. METHODS: A total of 100 cases with thoracic computed tomography scans from two publicly available datasets were used in this study. A modified CenterNet, an object location network, was employed to locate the center of the esophagus for each slice. Subsequently, the 3D U-net and 2D U-net_coarse models were trained to segment the esophagus based on the predicted object center. A 2D U-net_fine model was trained based on the object center updated according to the 3D U-net model. The dice similarity coefficient and the 95% Hausdorff distance were used as quantitative evaluation indexes of the delineation performance. The characteristics of the esophageal contours automatically delineated by the 2D U-net and 3D U-net models were summarized. Additionally, the impact of the accuracy of object localization on the delineation performance was analyzed. Finally, the delineation performance in different segments of the esophagus was also summarized. RESULTS: The mean dice coefficients of the 3D U-net, 2D U-net_coarse, and 2D U-net_fine models were 0.77, 0.81, and 0.82, respectively. The 95% Hausdorff distances for the above models were 6.55, 3.57, and 3.76, respectively. Compared with the 2D U-net, the 3D U-net had a lower incidence of delineating wrong objects and a higher incidence of missing objects. After using the fine object center, the average dice coefficient improved by 5.5% in the cases with a dice coefficient less than 0.75, while that value was only 0.3% in the cases with a dice coefficient greater than 0.75. The dice coefficients were lower for the esophagus between the orifice of the inferior and the pulmonary bifurcation compared with the other regions. CONCLUSION: The 3D U-net model tended to delineate fewer incorrect objects but also miss more objects. A two-stage strategy with accurate object localization could enhance the robustness of the segmentation model and significantly improve the esophageal delineation performance, especially for cases with poor delineation results.
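The dice similarity coefficient used as the primary evaluation index above is simply twice the overlap of the two binary masks divided by the sum of their sizes; a minimal sketch:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0  # both-empty convention

gt   = np.zeros((8, 8), int); gt[2:6, 2:6] = 1    # 16-pixel ground truth
pred = np.zeros((8, 8), int); pred[3:7, 2:6] = 1  # shifted prediction, 12 px overlap
score = dice(pred, gt)  # 2 * 12 / (16 + 16) = 0.75
```

The complementary 95% Hausdorff distance instead measures boundary disagreement (the 95th percentile of surface-to-surface distances), which is why it penalizes the stray contours of the coarse models so heavily even when overlap is reasonable.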


Asunto(s)
Aprendizaje Profundo , Esófago , Humanos , Esófago/diagnóstico por imagen , Tomografía Computarizada por Rayos X/métodos , Procesamiento de Imagen Asistido por Computador/métodos , Imagenología Tridimensional/métodos